Automated and Holistic Co-design of Neural Networks and ASICs for Enabling In-Pixel Intelligence

Kharel, Shubha R., Mukim, Prashansa, Maj, Piotr, Deptuch, Grzegorz W., Yoo, Shinjae, Ren, Yihui, Mandal, Soumyajit

arXiv.org Artificial Intelligence

Extreme edge-AI systems, such as those in readout ASICs for radiation detection, must operate under stringent hardware constraints such as micron-level dimensions, sub-milliwatt power, and nanosecond-scale speed while providing clear accuracy advantages over traditional architectures. Finding ideal solutions means identifying optimal AI and ASIC design choices from a design space that has expanded explosively as these domains merge, creating non-trivial couplings that narrow the set of feasible solutions as constraints tighten. It is impractical, if not impossible, to manually determine ideal choices among possibilities that easily exceed billions even in small problems. Existing methods to bridge this gap have leveraged theoretical understanding of hardware to guide neural architecture search. However, the assumptions made in computing such theoretical metrics are too idealized to provide sufficient guidance during the difficult search for a practical implementation. Meanwhile, theoretical estimates for many other crucial metrics (such as delay) do not even exist, and those that do depend on parameters of the process design kit (PDK). To address these challenges, we present a study that employs intelligent search using multi-objective Bayesian optimization, integrating both neural network search and ASIC synthesis in the loop. This approach provides reliable feedback on the collective impact of all cross-domain design choices. We showcase the effectiveness of our approach by finding several Pareto-optimal design choices for effective and efficient neural networks that perform real-time feature extraction from input pulses within the individual pixels of a readout ASIC.
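A multi-objective search of the kind described returns a Pareto front: the set of designs for which no other candidate is at least as good on every metric and strictly better on one. A minimal sketch of that selection step, assuming two hypothetical metrics to minimize (classification error and power), not the authors' actual objectives:

```python
def pareto_front(candidates):
    """Return the candidates not dominated by any other.

    Each candidate is (name, metrics), where metrics is a tuple to be
    minimized, e.g. (error, power). A candidate is dominated when some
    other candidate is <= on every metric and differs on at least one.
    """
    front = []
    for name, m in candidates:
        dominated = any(
            o != m and all(o[i] <= m[i] for i in range(len(m)))
            for _, o in candidates
        )
        if not dominated:
            front.append((name, m))
    return front

# Hypothetical design points: (name, (classification error, power in mW))
designs = [
    ("A", (0.10, 0.9)),
    ("B", (0.12, 0.4)),   # trades accuracy for lower power
    ("C", (0.15, 0.8)),   # dominated by B on both metrics
    ("D", (0.08, 1.5)),
]
print([n for n, _ in pareto_front(designs)])  # → ['A', 'B', 'D']
```

In the paper's setting the metric vectors would come from the in-the-loop ASIC synthesis rather than being fixed numbers, but the dominance check at the end of each optimization round is the same.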


Deep Learning for Gamma-Ray Bursts: A data driven event framework for X/Gamma-Ray analysis in space telescopes

Crupi, Riccardo

arXiv.org Artificial Intelligence

The HERMES (High Energy Rapid Modular Ensemble of Satellites) Pathfinder mission serves as an in-orbit demonstration of a constellation of nanosatellites whose primary scientific purpose is to discover intense high-energy transients, such as gamma-ray bursts, across a broad energy range (a few keV to a few MeV) with unparalleled temporal precision and exact localisation. The first constellation of six nanosatellites is expected to be launched by 2024. To fully exploit satellite data and allow faint astronomical events to emerge, a precise estimation of satellite background count rates is required to determine whether an event is statistically significant. The dynamics of the background are related to the satellite's orbital information, which varies on the order of minutes, potentially hiding long transient events. This work introduces two main contributions. First, a novel background estimator is presented that could potentially be fitted to any type of X/Gamma-ray satellite space telescope, capable of capturing long-term dynamics and accurate enough to detect faint transients. This estimator is built using a neural network and tested on data from the Fermi Gamma-ray Space Telescope's Gamma-ray Burst Monitor (GBM). Second, a trigger algorithm called FOCuS (Functional Online CUSUM) is employed to extract events from the background using the background estimator. The resulting framework, DeepGRB, can identify astronomical events that are both present in and absent from the Fermi-GBM catalog. The analysis of the discovered events reveals the strengths and weaknesses of the framework.
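FOCuS is a functional online variant of the classical CUSUM change-point test. The basic triggering idea can be illustrated with a plain one-sided CUSUM on background-subtracted counts; this is a simplified stand-in for FOCuS, with hypothetical slack and threshold values, not the mission's tuned algorithm:

```python
def cusum_trigger(counts, background, k=0.5, h=5.0):
    """One-sided CUSUM on background-subtracted residuals.

    counts:     observed counts per time bin
    background: estimated background counts per bin (e.g. from a NN model)
    k:          slack subtracted each step, damping small fluctuations
    h:          decision threshold; returns the index of the first bin
                where the cumulative excess crosses h, or None.
    """
    s = 0.0
    for i, (c, b) in enumerate(zip(counts, background)):
        s = max(0.0, s + (c - b) - k)
        if s > h:
            return i
    return None

# Toy example: flat background of 10 counts/bin with a burst at bin 6
bkg = [10.0] * 12
obs = [10, 11, 9, 10, 10, 10, 18, 19, 17, 10, 9, 10]
print(cusum_trigger(obs, bkg))  # → 6
```

The statistic resets to zero while counts track the background estimate and accumulates only sustained excesses, which is why an accurate long-term background model is the prerequisite for detecting faint, slow transients.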


Automated Chest X-Ray Report Generator Using Multi-Model Deep Learning Approach

Muharram, Arief Purnama, Haryono, Hollyana Puteri, Juma, Abassi Haji, Puspasari, Ira, Utama, Nugraha Priya

arXiv.org Artificial Intelligence

Reading and interpreting chest X-ray images is one of radiologists' most common routines. However, it can still be challenging, even for the most experienced among them. Therefore, we proposed a multi-model deep learning-based automated chest X-ray report generator system designed to assist radiologists in their work. The basic idea of the proposed system is to utilize multiple binary classification models to detect multiple abnormalities in a single image, with each model responsible for detecting one abnormality. In this study, we limited abnormality detection to cardiomegaly, lung effusion, and consolidation. The system generates a radiology report by performing three steps: image pre-processing, abnormality detection using deep learning models, and report generation. The aim of the image pre-processing step is to standardize the input by scaling it to 128x128 pixels and slicing it into three segments covering the upper, middle, and lower parts of the lung. After pre-processing, each corresponding model classifies the image, outputting 0 (zero) when no abnormality is detected and 1 (one) when an abnormality is present. The prediction outputs of the models are then concatenated to form a 'result code'. In the report generation step, the 'result code' is used to construct a report by selecting the appropriate pre-determined sentence for each detected abnormality. The proposed system is expected to reduce the workload of radiologists and increase the accuracy of chest X-ray diagnosis.
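The final step, mapping the concatenated 'result code' to pre-determined sentences, can be sketched as follows. The sentences, their ordering, and the no-finding fallback here are illustrative placeholders, not the ones used by the authors:

```python
# Order matches the order of the binary classifiers' outputs
ABNORMALITIES = ["cardiomegaly", "lung effusion", "consolidation"]

# Hypothetical pre-determined sentences, one per abnormality
SENTENCES = {
    "cardiomegaly": "The cardiac silhouette is enlarged, suggesting cardiomegaly.",
    "lung effusion": "Blunting of the costophrenic angle indicates pleural effusion.",
    "consolidation": "An area of increased opacity is consistent with consolidation.",
}

def generate_report(result_code: str) -> str:
    """Build a report from a result code such as '101'.

    Each character is one classifier's output: '1' means the
    corresponding abnormality was detected, '0' means it was not.
    """
    findings = [
        SENTENCES[name]
        for bit, name in zip(result_code, ABNORMALITIES)
        if bit == "1"
    ]
    if not findings:
        return "No cardiomegaly, effusion, or consolidation detected."
    return " ".join(findings)

print(generate_report("101"))
```

For example, '101' yields the cardiomegaly and consolidation sentences while omitting the effusion sentence.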


Rapid detection of rare events from in situ X-ray diffraction data using machine learning

Zheng, Weijian, Park, Jun-Sang, Kenesei, Peter, Ali, Ahsan, Liu, Zhengchun, Foster, Ian T., Schwarz, Nicholas, Kettimuthu, Rajkumar, Miceli, Antonino, Sharma, Hemant

arXiv.org Artificial Intelligence

High-energy X-ray diffraction methods can non-destructively map the 3D microstructure and associated attributes of metallic polycrystalline engineering materials in their bulk form. These methods are often combined with external stimuli such as thermo-mechanical loading to take snapshots over time of the evolving microstructure and attributes. However, the extreme data volumes and the high costs of traditional data acquisition and reduction approaches pose a barrier to quickly extracting actionable insights and improving the temporal resolution of these snapshots. Here we present a fully automated technique capable of rapidly detecting the onset of plasticity in high-energy X-ray microscopy data. Our technique is at least 50 times faster computationally than traditional approaches and works for data sets that are up to 9 times sparser than a full data set. This new technique leverages self-supervised image representation learning and clustering to transform massive data into compact, semantic-rich representations of visually salient characteristics (e.g., peak shapes). These characteristics can be a rapid indicator of anomalous events such as changes in diffraction peak shapes. We anticipate that this technique will provide just-in-time actionable information to drive smarter experiments that effectively deploy multi-modal X-ray diffraction methods that span many decades of length scales.
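The representation-and-clustering idea can be illustrated with a toy novelty check: embed each diffraction frame (here, hypothetical 2-D feature vectors standing in for learned representations) and flag the first frame that leaves the cluster of 'normal' peak shapes. This is a deliberately simplified stand-in for the paper's self-supervised pipeline:

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dist(a, b):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def detect_onset(baseline, stream, factor=3.0):
    """Flag the first frame whose embedding departs from the baseline.

    baseline: embeddings of frames known to be 'normal' (e.g. elastic regime)
    stream:   embeddings arriving in time order
    factor:   a frame triggers when its distance to the baseline centroid
              exceeds `factor` times the largest baseline distance.
    Returns the index of the first anomalous frame, or None.
    """
    c = centroid(baseline)
    radius = max(dist(v, c) for v in baseline)
    for i, v in enumerate(stream):
        if dist(v, c) > factor * radius:
            return i
    return None

# Toy 2-D 'embeddings': a tight baseline cluster, then drifting peak shapes
baseline = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [-0.1, 0.0]]
stream = [[0.05, 0.02], [0.0, -0.1], [1.5, 1.2], [1.6, 1.3]]
print(detect_onset(baseline, stream))  # → 2
```

Because the check runs on compact embeddings rather than raw detector frames, it can keep up with acquisition even on sparsely sampled data, which is the property the paper exploits.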


Machine learning for classifying and interpreting coherent X-ray speckle patterns

Shen, Mingren, Sheyfer, Dina, Loeffler, Troy David, Sankaranarayanan, Subramanian K. R. S., Stephenson, G. Brian, Chan, Maria K. Y., Morgan, Dane

arXiv.org Artificial Intelligence

Speckle patterns produced by coherent X-rays have a close relationship with the internal structure of materials, but quantitative inversion of that relationship to determine structure from speckle patterns is challenging. Here, we investigate the link between coherent X-ray speckle patterns and sample structures using a model 2D disk system and explore the ability of machine learning to learn aspects of the relationship. Specifically, we train a deep neural network to classify the coherent X-ray speckle patterns according to the disk number density of the corresponding structure. We demonstrate that the classification system is accurate for both non-disperse and disperse size distributions.


Online learning for X-ray, CT or MRI

Bhuiyan, Mosabbir, Nasim, MD Abdullah Al, Saif, Sarwar, Gupta, Dr. Kishor Datta, Alam, Md Jahangir, Talukder, Sajedul

arXiv.org Artificial Intelligence

Medical imaging plays an important role in the medical sector in identifying diseases. X-ray, computed tomography (CT) scans, and magnetic resonance imaging (MRI) are a few examples of medical imaging. Most of the time, these imaging techniques are utilized to examine and diagnose diseases, with medical professionals identifying the problem after analyzing the images. However, manual identification can be challenging because the human eye is not always able to recognize complex patterns in an image, making it difficult for any professional to recognize a disease rapidly and accurately. In recent years, medical professionals have started adopting Computer-Aided Diagnosis (CAD) systems to evaluate medical images. Such a system can analyze an image and detect disease precisely and quickly, although it has the drawback of requiring the images to be processed before analysis. Medical research has already entered a new era, that of Artificial Intelligence (AI). AI can automatically find complex patterns in an image and identify diseases. Medical imaging methods that use AI techniques will be covered in this chapter.


Accelerated deep self-supervised ptycho-laminography for three-dimensional nanoscale imaging of integrated circuits

Kang, Iksung, Jiang, Yi, Holler, Mirko, Guizar-Sicairos, Manuel, Levi, A. F. J., Klug, Jeffrey, Vogt, Stefan, Barbastathis, George

arXiv.org Artificial Intelligence

Three-dimensional inspection of nanostructures such as integrated circuits is important for security and reliability assurance. Two scanning operations are required: a ptychographic scan to recover the complex transmissivity of the specimen, and rotation of the specimen to acquire multiple projections covering the 3D spatial frequency domain. Two types of rotational scanning are possible: tomographic and laminographic. For flat, extended samples, for which full 180-degree coverage is not possible, the latter is preferable because it provides better coverage of the 3D spatial frequency domain than limited-angle tomography and because the amount of attenuation through the sample is approximately the same for all projections. However, both techniques are time consuming because of extensive acquisition and computation time. Here, we demonstrate the acceleration of ptycho-laminographic reconstruction of integrated circuits with 16 times fewer angular samples and 4.67 times faster computation by using a physics-regularized deep self-supervised learning architecture. We check the fidelity of our reconstruction against a densely sampled reconstruction that uses full scanning and no learning. As already reported elsewhere [Zhou and Horstmeyer, Opt. Express, 28(9), pp. 12872-12896], we observe improvement of reconstruction quality even over the densely sampled reconstruction, due to the ability of the self-supervised learning kernel to fill the missing cone.


Radiology, News, Education, Service

#artificialintelligence

While advances in hybrid imaging and other cutting-edge modalities seem to get all the attention, digital radiography (DR) maintains its solid foundation of diagnostic support for imaging centers, emergency departments, outpatient clinics, and mobile operations worldwide. Based on the myriad of presentations scheduled for RSNA 2019, DR arguably might be the modality that will benefit most from artificial intelligence (AI) and deep-learning algorithms. As DR-specific applications are tested and validated, radiologists could soon turn to AI to generate clinically relevant x-ray reports, diagnose fractures and other ailments throughout the body, visualize motion, and increase the accuracy of their readings. AI also is expected to significantly reduce the time and effort it takes for some of the tasks that are necessary but tedious and time-consuming with digital radiography, such as double reading and confirming normal results. Beyond AI, researchers continue to explore hardware improvements in DR systems and develop new technologies to sharpen image quality and shorten exam and procedure times.


Getting Started with Natural Language Processing in Java

@machinelearnbot

Natural Language Processing (NLP) is used in many applications to provide capabilities that were previously not possible. It involves analyzing text to obtain the intent and meaning, which can then be used to support an application. Using NLP within an application requires a combination of standard Java techniques and often specialized libraries frequently based on models that have been trained. You need to know what is available, how these technologies can be used, and when they should be used. In this course we will cover the essence of NLP using Java.